Protect privacy of deep classification networks by exploiting their generative power


Abstract

Research has shown that deep learning models are vulnerable to membership inference attacks, which aim to determine whether an example was in a model's training set. We propose a new framework to defend against this kind of attack. Our key insight is that if we retrain the original classifier on a dataset that is independent of the original training set but whose elements are sampled from the same distribution, the retrained classifier will leak no information about the training set that cannot already be inferred from the distribution. Our framework consists of three phases. First, the classifier is transferred to a Joint Energy-based Model (JEM) to exploit the model's implicit generative power. Then, we sample from the JEM to create a new dataset. Finally, the new dataset is used to retrain or fine-tune the classifier. We empirically studied different transfer schemes for the JEM and different fine-tuning/retraining strategies against shadow-model attacks. Our evaluation shows that the framework can suppress the attacker's advantage to a negligible level while keeping the classifier's accuracy acceptable. We also compared it with other state-of-the-art defenses under adaptive attackers and found our defense effective even in the worst-case scenario. Besides, we found that combining our framework with other defenses often achieves better robustness. Our code is made available at https://github.com/ChenJiyu/meminf-defense.git.
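The three-phase pipeline described above (view the classifier as a JEM, sample a surrogate dataset from it, then retrain or fine-tune on those samples) can be illustrated with a minimal PyTorch sketch of the sampling step. The energy follows the standard JEM construction E(x) = -logsumexp_y f(x)[y]; the function names, SGLD hyperparameters, and pseudo-labelling below are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

def jem_energy(classifier: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # JEM view of a classifier: E(x) = -logsumexp_y f(x)[y],
    # so the implicit density is p(x) proportional to exp(-E(x)).
    logits = classifier(x)
    return -torch.logsumexp(logits, dim=1)

def sgld_sample(classifier: nn.Module, x_init: torch.Tensor,
                n_steps: int = 20, step_size: float = 1.0,
                noise_std: float = 0.01) -> torch.Tensor:
    # Approximate samples from the classifier's implicit density via
    # Stochastic Gradient Langevin Dynamics (hyperparameters are placeholders).
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = jem_energy(classifier, x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = (x - 0.5 * step_size * grad
             + noise_std * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

# Hypothetical usage: build a surrogate dataset and retrain on it only.
# classifier = ...                                # a trained classifier
# x0 = torch.rand(64, 3, 32, 32) * 2 - 1          # noise initialization
# synthetic = sgld_sample(classifier, x0)
# labels = classifier(synthetic).argmax(dim=1)    # pseudo-labels
# (synthetic, labels) would then replace the private training set in the
# retraining / fine-tuning phase.
```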


Similar resources

Privacy-preserving generative deep neural networks support clinical data sharing

Though it is widely recognized that data sharing enables faster scientific progress, the sensible need to protect participant privacy hampers this practice in medicine. We train deep neural networks that generate synthetic subjects closely resembling study participants. Using the SPRINT trial as an example, we show that machine-learning models built from simulated participants generalize to the...


Deep Generative Stochastic Networks Trainable by Backprop

We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally invo...


Part Localization by Exploiting Deep Convolutional Networks

Deep convolutional neural networks have shown an amazing ability to learn object category models from large-scale data. In this paper, we present a novel approach for part discovery and detection with a pre-trained convolutional neural network. It is based on analyzing gradients of intermediate layer outputs and locating areas containing large gradients. By comparing these with ground-truth par...


Generative learning for deep networks

Learning, taking into account full distribution of the data, referred to as generative, is not feasible with deep neural networks (DNNs) because they model only the conditional distribution of the outputs given the inputs. Current solutions are either based on joint probability models facing difficult estimation problems or learn two separate networks, mapping inputs to outputs (recognition) an...


Generalization of Deep Neural Networks for Chest Pathology Classification in X-Rays Using Generative Adversarial Networks

Medical datasets are often highly imbalanced with overrepresentation of common medical problems and a paucity of data from rare conditions. We propose simulation of pathology in images to overcome the above limitations. Using chest X-rays as a model medical image, we implement a generative adversarial network (GAN) to create artificial images based upon a modest sized labeled dataset. We employ...



Journal

Journal title: Machine Learning

Year: 2021

ISSN: 0885-6125, 1573-0565

DOI: https://doi.org/10.1007/s10994-021-05951-6